9 research outputs found

    FV-Train: Quantum Convolutional Neural Network Training with a Finite Number of Qubits by Extracting Diverse Features

    Full text link
    The quantum convolutional neural network (QCNN) has emerged as a research topic as we enter the noisy intermediate-scale quantum (NISQ) era and beyond. Because the convolutional filters in a QCNN extract intrinsic features using a quantum ansatz, they should use only a finite number of qubits to avoid barren plateaus, which limits the amount of feature information that can be captured. In this paper, we propose a novel QCNN training algorithm, called fidelity-variation training (FV-Training), that optimizes feature extraction while using only a finite number of qubits.
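
    To make the idea concrete, below is a small NumPy sketch of one way a fidelity-based diversity term could be computed for a set of few-qubit filter states; the function names, the pairwise-fidelity penalty, and the loss weighting are illustrative assumptions, not the exact FV-Training objective from the paper.

```python
import numpy as np

def fidelity(psi_i, psi_j):
    """Fidelity |<psi_i|psi_j>|^2 between two pure states given as complex amplitude vectors."""
    return np.abs(np.vdot(psi_i, psi_j)) ** 2

def fidelity_variation_penalty(filter_states):
    """Illustrative diversity term: average pairwise fidelity across the quantum
    filters' output states.  Minimising it pushes the few-qubit filters toward
    extracting mutually distinct features; the actual FV-Training loss may differ."""
    n = len(filter_states)
    pair_fids = [fidelity(filter_states[i], filter_states[j])
                 for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pair_fids))

# Toy usage: three random 2-qubit (4-amplitude) filter states.
rng = np.random.default_rng(0)
states = []
for _ in range(3):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    states.append(v / np.linalg.norm(v))

task_loss = 0.7                                         # placeholder task loss (e.g. cross-entropy)
total_loss = task_loss + 0.1 * fidelity_variation_penalty(states)  # weighted diversity term
print(round(total_loss, 4))
```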

    Scalable Quantum Convolutional Neural Networks

    Full text link
    With the beginning of the noisy intermediate-scale quantum (NISQ) era, quantum neural networks (QNNs) have recently emerged as a solution to problems that classical neural networks cannot solve. Moreover, the quantum convolutional neural network (QCNN) is attracting attention as the next generation of QNN because it can process high-dimensional vector inputs. However, due to the nature of quantum computing, it is difficult for the conventional QCNN to extract a sufficient number of features. Motivated by this, we propose a new version of the QCNN, named the scalable quantum convolutional neural network (sQCNN). In addition, using the fidelity of quantum computing, we propose an sQCNN training algorithm named reverse fidelity training (RF-Train) that maximizes the performance of sQCNN.
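
    The following PennyLane sketch illustrates the scaling idea stated above: keep each quantum filter small and grow feature capacity by adding independently parameterised filters whose measurements are concatenated. The two-qubit ansatz, filter count, and angle encoding are assumptions for illustration, and the RF-Train objective itself (a fidelity-based diversity term, similar in spirit to the sketch above) is not reproduced here.

```python
import numpy as np
import pennylane as qml

N_QUBITS = 2          # qubits per filter (kept small on purpose)
N_FILTERS = 4         # scale feature capacity by adding filters, not qubits

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def quantum_filter(params, patch):
    """One few-qubit convolutional filter: angle-encode the input patch,
    apply a shallow trainable ansatz, and read out Pauli-Z expectations."""
    qml.AngleEmbedding(patch, wires=range(N_QUBITS))
    for w in range(N_QUBITS):
        qml.RY(params[w], wires=w)
    qml.CNOT(wires=[0, 1])
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

def sqcnn_features(filter_bank, patch):
    """Concatenate all filters' measurements into one classical feature vector."""
    return np.concatenate([np.asarray(quantum_filter(p, patch)) for p in filter_bank])

rng = np.random.default_rng(1)
filter_bank = [rng.uniform(0, np.pi, N_QUBITS) for _ in range(N_FILTERS)]
patch = rng.uniform(0, np.pi, N_QUBITS)    # a tiny input patch, angle-encoded
print(sqcnn_features(filter_bank, patch))  # 8 features from 4 two-qubit filters
```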

    Quantum Split Neural Network Learning using Cross-Channel Pooling

    Full text link
    In recent years, the field of quantum science has attracted significant interest across various disciplines, including quantum machine learning, quantum communication, and quantum computing. Among these emerging areas, quantum federated learning (QFL) has gained particular attention due to the integration of quantum neural networks (QNNs) with traditional federated learning (FL) techniques. In this study, a novel approach entitled quantum split learning (QSL) is presented, which represents an advanced extension of classical split learning. Previous research in classical computing has demonstrated numerous advantages of split learning, such as accelerated convergence, reduced communication costs, and enhanced privacy protection. To maximize the potential of QSL, cross-channel pooling is introduced, a technique that capitalizes on the distinctive properties of quantum state tomography facilitated by QNNs. Through rigorous numerical analysis, evidence is provided that QSL not only achieves a 1.64% higher top-1 accuracy compared to QFL but also demonstrates robust privacy preservation in the context of the MNIST classification task.
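
    As a rough illustration of cross-channel pooling (one plausible reading of the abstract, not the authors' implementation), the NumPy sketch below averages measurement-derived probability vectors from several client-side QNN channels before they cross the split cut; the channel count, outcome dimension, and mean-pooling choice are assumptions.

```python
import numpy as np

def channel_outputs(n_channels, n_outcomes, rng):
    """Stand-in for per-channel QNN measurement statistics (each row sums to 1)."""
    raw = rng.random((n_channels, n_outcomes))
    return raw / raw.sum(axis=1, keepdims=True)

def cross_channel_pool(channels, mode="mean"):
    """Pool across the channel axis before handing activations to the server-side model."""
    return channels.mean(axis=0) if mode == "mean" else channels.max(axis=0)

rng = np.random.default_rng(7)
client_channels = channel_outputs(n_channels=4, n_outcomes=10, rng=rng)  # e.g. 10 MNIST classes
smashed = cross_channel_pool(client_channels)   # what crosses the split cut to the server
print(smashed.shape, float(smashed.sum()))      # (10,) and ~1.0 for mean pooling
```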

    Neural Architectural Nonlinear Pre-Processing for mmWave Radar-based Human Gesture Perception

    Full text link
    In modern on-driving computing environments, many sensors are used for context-aware applications. This paper utilizes two deep learning models, U-Net and EfficientNet, both based on convolutional neural networks (CNNs), to detect hand gestures and remove noise in range-Doppler map images measured with a millimeter-wave (mmWave) radar. Accurate pre-processing is essential for high classification performance: denoising the images before they enter the first deep learning stage increases classification accuracy. This paper therefore proposes a high-performance, deep-neural-network-based nonlinear pre-processing method.
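
    A hedged PyTorch sketch of the described pipeline follows: a small convolutional denoiser cleans the range-Doppler map before it reaches the gesture classifier. The paper uses U-Net and EfficientNet; the tiny stand-in networks, layer sizes, map resolution, and class count below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(            # nonlinear pre-processing stage (U-Net stand-in)
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

classifier = nn.Sequential(          # gesture classifier stand-in (EfficientNet in the paper)
    nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 5),                 # e.g. 5 gesture classes (assumption)
)

noisy_rdm = torch.randn(1, 1, 64, 64)        # one noisy range-Doppler map (assumed 64x64)
logits = classifier(denoiser(noisy_rdm))     # denoise first, then classify
print(logits.shape)                          # torch.Size([1, 5])
```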

    SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks

    Full text link
    Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities. However, applying FL in practice is challenging due to the local devices' heterogeneous energy, wireless channel conditions, and non-independently and identically distributed (non-IID) data distributions. To cope with these issues, this paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs). Integrating FL with SNNs is challenging due to time-varying channel conditions and data distributions. In addition, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, which makes SNNs ill-suited for FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges the superposition of multiple-width configurations decoded as many times as possible for a given communication throughput. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but also deals with non-IID data distributions and poor channel conditions, which is also corroborated by data-intensive simulations.
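
    The core of superposition training (ST) can be sketched in a few lines of PyTorch: a slimmable layer is evaluated at several width configurations in one forward pass, the per-width losses are summed, and a single backward step updates the shared weights. The toy model, the two widths, and the zero-padding used to align feature shapes are assumptions rather than the paper's training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    """A full-width linear layer that can run with only a fraction of its output units."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.full = nn.Linear(in_dim, out_dim)

    def forward(self, x, width=1.0):
        k = max(1, int(self.full.out_features * width))
        return F.linear(x, self.full.weight[:k], self.full.bias[:k])

torch.manual_seed(0)
layer = SlimmableLinear(16, 8)
head = nn.Linear(8, 2)               # classifier head on width-aligned features
opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=0.1)

x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))
loss = 0.0
for width in (0.5, 1.0):             # superpose the width configurations in one pass
    feats = layer(x, width)
    feats = F.pad(feats, (0, 8 - feats.shape[1]))   # align shapes across widths
    loss = loss + F.cross_entropy(head(feats), y)
opt.zero_grad()
loss.backward()                      # one backward step updates the shared weights
opt.step()
print(float(loss))
```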

    FV-Train: Quantum Convolutional Neural Network Training with a Finite Number of Qubits by Extracting Diverse Features (Student Abstract)

    No full text
    The quantum convolutional neural network (QCNN) has emerged as a research topic as we enter the noisy intermediate-scale quantum (NISQ) era and beyond. Because the convolutional filters in a QCNN extract intrinsic features using a quantum ansatz, they should use only a finite number of qubits to avoid barren plateaus, which limits the amount of feature information that can be captured. In this paper, we propose a novel QCNN training algorithm, called fidelity-variation training (FV-Training), that optimizes feature extraction while using only a finite number of qubits.

    Joint superposition coding and training for federated learning over multi-width neural networks

    No full text
    This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural network (SNN) architectures. FL preserves data privacy by exchanging the locally trained models of mobile devices. By adopting SNNs as local models, FL can flexibly cope with the time-varying energy capacities of mobile devices. Combining FL and SNNs is however non-trivial, particularly under wireless connections with time-varying channel conditions. Furthermore, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, so they are ill-suited to FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges the superposition of multiple width configurations that are decoded as many times as possible for a given communication throughput. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but can also counteract non-IID data distributions and poor channel conditions, which is also corroborated by simulations.
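
    The superposition-coding (SC) side of the exchange can be pictured with the schematic NumPy sketch below: each local update is split into a base segment (the narrowest width configuration) and an enhancement segment, and the receiver recovers the enhancement only when the channel is good enough. The SNR threshold, channel draws, and segment split are illustrative assumptions, not the paper's communication model.

```python
import numpy as np

def sc_encode(update, base_frac=0.5):
    """Split a flat parameter update into (base, enhancement) segments."""
    cut = int(len(update) * base_frac)
    return update[:cut], update[cut:]

def sc_decode(base, enhancement, snr_db, threshold_db=5.0):
    """Successive decoding: the enhancement segment survives only above the SNR threshold."""
    return (base, enhancement) if snr_db >= threshold_db else (base, None)

rng = np.random.default_rng(3)
full_update = rng.normal(size=10)                 # a device's flattened model update
base, enh = sc_encode(full_update)

for snr in (2.0, 12.0):                           # a poor and a good channel draw
    got_base, got_enh = sc_decode(base, enh, snr)
    decoded = got_base if got_enh is None else np.concatenate([got_base, got_enh])
    print(f"SNR {snr:4.1f} dB -> decoded {len(decoded)} of {len(full_update)} parameters")
```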

    ZnO nanotube waveguide arrays on graphene films for local optical excitation on biological cells

    No full text
    We report on scalable and position-controlled optical nanoprobe arrays using ZnO nanotube waveguides on graphene films for use in local optical excitation. For the waveguide fabrication, position-controlled and well-ordered ZnO nanotube arrays were grown on chemical-vapor-deposited graphene films with a submicron patterned mask layer, and Au was deposited in the interspaces between the nanotubes. Mammalian cells were cultured on the nanotube waveguide arrays and were locally excited by light illuminated through the nanotubes. Fluorescence and optogenetic signals could be excited through the optical nanoprobes. This method offers the ability to investigate cellular behavior with a spatial resolution that surpasses current limitations.

    SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks

    No full text
    Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities. However, applying FL in practice is challenging due to the local devices' heterogeneous energy, wireless channel conditions, and non-independently and identically distributed (non-IID) data distributions. To cope with these issues, this paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs). Integrating FL with SNNs is challenging due to time-varying channel conditions and data distributions. In addition, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, which makes SNNs ill-suited for FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges the superposition of multiple-width configurations decoded as many times as possible for a given communication throughput. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but also deals with non-IID data distributions and poor channel conditions, which is also corroborated by data-intensive simulations.